Performance information is crucial for high-performance government. Performance budgeting is part of that: budget decision-making needs to be informed by good information about the effectiveness and efficiency of government spending.
The approach to performance information must, however, be highly strategic. To succeed, it needs to be built on a recognition of two fundamental realities. Firstly, there are considerable limits to our ability to measure and analyze the effectiveness and efficiency of expenditure. Secondly, producing more performance information does not mean that the information will be used.
A recent report from the Tony Blair Institute in the UK appears not to grasp these essential points. The Institute envisages building a government-wide performance dashboard in which all of the outcomes of government programs will be reported in “close to real time.” This will mean that “failure will have nowhere to hide.” Armed with this “comprehensive shared visibility of spending against outcomes,” the Ministry of Finance (HM Treasury in the UK) will supposedly be empowered to make continuous decisions about what programs to eliminate or ramp up. No need then for multi-year budgets of the sort that the UK has at present – budgeting will be done on a continuous basis.
This vision of budgeting driven by an outcomes dashboard is, unfortunately, pure fantasy. Governments should certainly arm themselves with the best practical set of outcome indicators. But surely we don’t need to be reminded that quite a few government outcomes are unmeasurable (e.g. the level of national security) or are – because of the extensive influence of “contextual factors” – only very imperfectly measurable (e.g. changes in crime rates are not a good measure of the effectiveness of policing). As for reporting outcomes in “close to real time,” are we going to test the literacy and numeracy levels of school children on a weekly or daily basis? Will there be daily household surveys to measure the unemployment rate? Or the prevalence of undesirable health behaviors such as smoking? A moment’s thought makes it obvious that many of the outcomes that are important to government can only be measured periodically.
Implicit in the Blair Institute’s vision is what might be called the perfect information illusion – that it is possible to scientifically measure the effectiveness and efficiency of all government expenditure. The reality is, however, that what the economists call “imperfect information” is a fundamental reality of life in government as elsewhere.
Taking a Strategic Approach to Performance Information
What does it mean, then, to take a strategic approach to performance information? How can we increase the role that performance information plays in budgeting and government-wide performance management, while recognizing its inherent limitations?
Here are some key principles:
1. Put more emphasis on using performance information, not just producing it
Governments in many advanced countries have spent massively over recent decades on developing better performance information. Regrettably, much of the information that has been produced is never used, or is used only a little. Supply has run ahead of demand. So we need to focus more on how to ensure that this valuable information is actually used.
In a budgeting context, this means building more systematic processes for reviewing performance as part of the budget preparation process, to ensure that performance is more systematically taken into account in resource allocation decisions. It also means carrying out more spending reviews, using available performance information. (Note that here I am referring to spending review in the international sense of the systematic review of baseline expenditure, not in the UK sense of the preparation of the multi-annual budget.)
2. Be highly selective in the choice of performance indicators
We need to focus on indicators that have demonstrable decision-making relevance, and for which we can justify the cost of collecting and verifying the data (which can be considerable). There should be no assumption that the more indicators, the better.
For budgeting and government-wide performance management, outcome and output indicators matter most. The primary focus should be on those areas of government where outcomes and outputs are most measurable. This means particularly areas of service provision to individual citizens, such as education and health.
There needs to be a better recognition of the difference between the performance indicators that are relevant for internal management within government agencies (many input and activity measures) and those that are relevant for government-wide budgeting and performance management (outcomes and outputs). Too often, the mistake is to throw a mass of internal management indicators at political leaders and parliaments.
3. Provide narrative advice on how to interpret performance indicators
Performance indicators have great potential to mislead as well as to inform. An outcome indicator might look bad for reasons that have nothing to do with the effectiveness of what government is doing. Crime rates might, for example, be going up for long-term social and economic reasons even though policing is becoming more effective. So when outcome indicators are presented – in performance reports or “dashboards” – there should always be brief discussions of the “contextual factors” that may be influencing them.
The same applies to some output indicators. Unit cost measures, for example, are potentially a very valuable measure of efficiency. But it is often not possible to see whether efficiency is improving or deteriorating simply by looking at the time trend of unit costs. Analysis of other influences that might be involved (such as changing input prices, or changes in average case complexity) is frequently required.
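The point about unit costs can be made concrete with a small sketch. The figures below are entirely hypothetical, and the deflation approach shown (dividing total cost by an input price index before computing cost per output) is just one simple way of adjusting for input-price movements; real analysis would also need to account for factors such as case complexity.

```python
# Hypothetical illustration: nominal unit costs can rise even while
# underlying efficiency improves, once input-price inflation is removed.

def real_unit_cost(total_cost, outputs, input_price_index):
    """Unit cost deflated to base-year input prices."""
    return (total_cost / input_price_index) / outputs

# Invented two-year figures for a service (10,000 outputs each year)
year1 = real_unit_cost(total_cost=1_000_000, outputs=10_000, input_price_index=1.00)
year2 = real_unit_cost(total_cost=1_080_000, outputs=10_000, input_price_index=1.12)

nominal_change = (1_080_000 / 10_000) / (1_000_000 / 10_000) - 1  # +8% nominal
real_change = year2 / year1 - 1                                   # about -3.6% real

print(f"Nominal unit cost change:        {nominal_change:+.1%}")
print(f"Price-adjusted unit cost change: {real_change:+.1%}")
```

On these invented numbers, the raw time trend (an 8 percent rise in cost per output) suggests deteriorating efficiency, while the price-adjusted figure shows efficiency actually improving – exactly the kind of misreading that narrative analysis alongside the indicator is meant to prevent.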
More generally, we need to be aware of the potential “perverse effects” that performance indicators – and even more, performance targets – may generate. This refers to people making performance indicators look better by doing things that actually reduce efficiency or effectiveness – for example, meeting a waiting-time target by prioritizing simple cases over more urgent ones.
4. Be even more selective in using evaluation
Evaluation – applied to the right topics – is a very useful part of the performance information armory. However, it is expensive, and its ability to yield robust conclusions about the effectiveness of government programs is widely exaggerated. So evaluation topics, and the tasks assigned to evaluators in conducting those evaluations, should be chosen carefully.
The proposition that all government programs should be regularly subjected to evaluations is misguided. A few governments have tried this in the past – for example, Australia and Canada. It turned out to be a major waste of money and was quickly abandoned.
5. Increase the role of other forms of systematic performance analysis
Efficiency is at least as important as effectiveness, and evaluation isn’t much good for efficiency analysis. We need more reliance on other techniques of efficiency analysis, such as cost analysis and business process analysis. However, the same caveat applies as for evaluation: these analytic methodologies should be applied highly selectively.